██████╗ ██╗███╗ ██╗███████╗████████╗██████╗ ██╗██╗ ██╗███████╗
██╔══██╗██║████╗ ██║██╔════╝╚══██╔══╝██╔══██╗██║██║ ██╔╝██╔════╝
██████╔╝██║██╔██╗ ██║███████╗ ██║ ██████╔╝██║█████╔╝ █████╗
██╔══██╗██║██║╚██╗██║╚════██║ ██║ ██╔══██╗██║██╔═██╗ ██╔══╝
██████╔╝██║██║ ╚████║███████║ ██║ ██║ ██║██║██║ ██╗███████╗
╚═════╝ ╚═╝╚═╝ ╚═══╝╚══════╝ ╚═╝ ╚═╝ ╚═╝╚═╝╚═╝ ╚═╝╚══════╝
AI-Powered Penetration Testing Platform
BinStrike is an intelligent, production-grade platform that augments penetration testing workflows with AI assistance. It combines a powerful knowledge graph, AI-powered analysis, neuro-symbolic guardrails, multi-factor authentication, and microservices architecture—all while keeping the human tester in full control.
- Getting Started
- Features Overview
- Architecture
- Complete Installation Guide
- Configuration Reference
- Authentication & Security
- API Reference
- MCP Integration
- Tools Reference
- Usage Examples
- Development & Testing
- Troubleshooting
- License
Choose your deployment mode:

**Option A — CLI Mode (single user):**

```bash
# 1. Clone repository
git clone https://github.com/your-org/binstrike.git
cd binstrike

# 2. Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# 3. Install BinStrike
pip install -e ".[all]"

# 4. Run CLI
binstrike
```

**Option B — Production Stack (Docker):**

```bash
# 1. Clone repository
git clone https://github.com/your-org/binstrike.git
cd binstrike

# 2. Copy environment template
cp .env.example .env

# 3. Generate secure secrets and update .env
python3 -c "import secrets; print('JWT_SECRET=' + secrets.token_urlsafe(64))"
# Copy the output and paste into your .env file

# 4. Edit .env file with your passwords
nano .env  # or use any text editor

# 5. Start all services
chmod +x start-production.sh
./start-production.sh start

# 6. Access API
curl http://localhost:8000/health
```

This section provides complete step-by-step instructions for production deployment.
Ensure you have the following installed:

```bash
# Check Docker version (24.0+ required)
docker --version

# Check Docker Compose version (2.20+ required)
docker compose version

# Check available memory (8GB+ recommended)
free -h
```

If Docker is not installed:

```bash
# Ubuntu/Debian
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in, then verify:
docker run hello-world
```

Clone the repository:

```bash
git clone https://github.com/your-org/binstrike.git
cd binstrike
```

Copy the example environment file:

```bash
cp .env.example .env
```

Open `.env` in your text editor and configure the following:
```bash
nano .env  # or: code .env, vim .env, etc.
```

```ini
# =============================================================================
# DATABASE - PostgreSQL
# =============================================================================
POSTGRES_USER=binstrike
POSTGRES_PASSWORD=<YOUR_SECURE_PASSWORD>   # Change this!
POSTGRES_DB=binstrike
POSTGRES_PORT=5432

# =============================================================================
# GRAPH DATABASE - Neo4j
# =============================================================================
NEO4J_USER=neo4j
NEO4J_PASSWORD=<YOUR_SECURE_PASSWORD>      # Change this!
NEO4J_HTTP_PORT=7474
NEO4J_BOLT_PORT=7687

# =============================================================================
# CACHE - Redis
# =============================================================================
REDIS_PASSWORD=<YOUR_SECURE_PASSWORD>      # Change this!
REDIS_PORT=6379

# =============================================================================
# MESSAGE QUEUE - RabbitMQ
# =============================================================================
RABBITMQ_USER=binstrike
RABBITMQ_PASSWORD=<YOUR_SECURE_PASSWORD>   # Change this!
RABBITMQ_VHOST=binstrike
RABBITMQ_PORT=5672
RABBITMQ_MGMT_PORT=15672

# =============================================================================
# AUTHENTICATION - CRITICAL!
# =============================================================================
# Generate with: python3 -c "import secrets; print(secrets.token_urlsafe(64))"
JWT_SECRET=<YOUR_64_CHARACTER_SECRET>      # Change this!
JWT_ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_TOKEN_EXPIRE_DAYS=7
```

Generate secure passwords using this command:
```bash
# Generate a secure random password
python3 -c "import secrets; print(secrets.token_urlsafe(32))"

# Generate JWT secret (64 characters)
python3 -c "import secrets; print(secrets.token_urlsafe(64))"
```

Optional AI provider configuration:

```ini
# =============================================================================
# AI PROVIDERS
# =============================================================================
AI_PROVIDER=ollama                        # Options: ollama, anthropic, openai
OLLAMA_BASE_URL=http://localhost:11434    # Ollama server URL

# For cloud AI providers (optional):
ANTHROPIC_API_KEY=sk-ant-...              # If using Claude
OPENAI_API_KEY=sk-...                     # If using GPT-4
```

Full `.env` example (with secure passwords):
```ini
# Database
POSTGRES_USER=binstrike
POSTGRES_PASSWORD=xK9mP2vL8nQ4wR7jT1uY5zB3cF6hA0sE
POSTGRES_DB=binstrike
POSTGRES_PORT=5432

# Neo4j
NEO4J_USER=neo4j
NEO4J_PASSWORD=yM3kW9pL2nJ7tR4vX1bQ8cH5gD0fA6sZ
NEO4J_HTTP_PORT=7474
NEO4J_BOLT_PORT=7687

# Redis
REDIS_PASSWORD=uT8kN2mL5pQ9wR3jX7vB1cY4hG0fA6sZ
REDIS_PORT=6379

# RabbitMQ
RABBITMQ_USER=binstrike
RABBITMQ_PASSWORD=zQ5wE8rT2yU9iO3pA7sD1fG6hJ4kL0mN
RABBITMQ_VHOST=binstrike
RABBITMQ_PORT=5672
RABBITMQ_MGMT_PORT=15672

# JWT Authentication
JWT_SECRET=xK9mP2vL8nQ4wR7jT1uY5zB3cF6hA0sEyM3kW9pL2nJ7tR4vX1bQ8cH5gD0fA6sZuT8kN
JWT_ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_TOKEN_EXPIRE_DAYS=7

# AI Provider
AI_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434

# Service Ports
API_GATEWAY_PORT=8000
SESSION_SERVICE_PORT=8001
TOOL_SERVICE_PORT=8002
AI_SERVICE_PORT=8003
GRAPH_SERVICE_PORT=8004
REPORT_SERVICE_PORT=8005

# Other settings
TOOL_TIMEOUT=600
LOG_LEVEL=INFO
RATE_LIMIT=100
```

Make the startup script executable and start all services:

```bash
chmod +x start-production.sh

# Start all services (will prompt if ports are in use)
./start-production.sh start

# OR: start with the --force flag to auto-kill conflicting processes
./start-production.sh start --force
```

**Note on port conflicts:** If you have local services running (e.g., PostgreSQL on 5432, Redis on 6379), the script will detect them and ask whether you want to kill them. Use `--force` to automatically attempt to free the ports, or update your `.env` file to use different ports (for example, PostgreSQL=5433, Redis=6380).
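If you prefer to check for conflicts yourself before starting the stack, a small port probe is enough. A minimal Python sketch — the port list mirrors the `.env` defaults above, and the helper itself is illustrative, not part of BinStrike:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Ports the production stack expects to bind (from the .env defaults)
for port in (5432, 6379, 7474, 7687, 8000, 15672):
    if port_in_use(port):
        print(f"Port {port} is busy - stop the local service or change it in .env")
```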
You should see output like:

```text
[INFO] Starting BinStrike Production Stack...
[INFO] Starting infrastructure services (PostgreSQL, Neo4j, Redis, RabbitMQ)...
[INFO] Starting microservices...
[INFO] All services started!

================================================================
  BINSTRIKE PRODUCTION STACK STATUS
================================================================

Infrastructure Services:
  Service           URL/Address                    Status
  -------           -----------                    ------
  PostgreSQL        localhost:5432                 🟢 Up
  Neo4j Browser     http://localhost:7474          🟢 Up
  Neo4j Bolt        bolt://localhost:7687          🟢 Up
  Redis             localhost:6380                 🟢 Up
  RabbitMQ Mgmt     http://localhost:15672         🟢 Up
  RabbitMQ AMQP     localhost:5672                 🟢 Up

Microservices API:
  API Gateway (Main)  http://localhost:8000        🟢 Up

Quick Links:
  • Health Check:     http://localhost:8000/health
  • Neo4j Browser:    http://localhost:7474
  • RabbitMQ Console: http://localhost:15672
================================================================
```

Verify the deployment:

```bash
# Check all services are running
./start-production.sh status

# Test the API health endpoint
curl http://localhost:8000/health
```

Expected response:
```json
{
  "status": "healthy",
  "timestamp": "2026-01-23T00:00:00Z",
  "services": {
    "auth": {"status": "healthy"},
    "tools": {"status": "healthy"},
    "ai": {"status": "healthy"},
    "graph": {"status": "healthy"},
    "reports": {"status": "healthy"}
  }
}
```

The system creates a default admin account on first startup.
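Scripts can parse this response to flag degraded services before proceeding. A client-side sketch, assuming only the response shape shown above:

```python
import json

def unhealthy_services(health_json: str) -> list[str]:
    """Return the names of services not reporting 'healthy'."""
    payload = json.loads(health_json)
    return [name for name, info in payload.get("services", {}).items()
            if info.get("status") != "healthy"]

# Example with one degraded service
response = '{"status": "degraded", "services": {"auth": {"status": "healthy"}, "graph": {"status": "down"}}}'
print(unhealthy_services(response))  # → ['graph']
```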
Log in with the default credentials:

```bash
curl -X POST http://localhost:8000/auth/login \
  -H "Content-Type: application/json" \
  -d '{
    "email": "admin@binstrike.local",
    "password": "Admin@BinStrike2024!"
  }'
```

For air-gapped/local AI capabilities:
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Download a model (llama3 recommended)
ollama pull llama3:8b

# Start Ollama server (if not auto-started)
ollama serve
```

The `start-production.sh` script provides these commands:
| Command | Description |
|---|---|
| `./start-production.sh start` | Start all services (infrastructure + microservices) |
| `./start-production.sh start --force` | Start all services, auto-kill conflicting port processes |
| `./start-production.sh stop` | Stop all services |
| `./start-production.sh restart` | Restart all services |
| `./start-production.sh status` | Show running containers and their status |
| `./start-production.sh logs` | View combined logs from all services |
| `./start-production.sh logs <service>` | View logs for a specific service (e.g., `logs session-service`) |
| `./start-production.sh build` | Rebuild all Docker images |
| `./start-production.sh clean` | Stop and remove all containers and volumes (destructive) |
| `./start-production.sh dev` | Start infrastructure only (for local development) |
| `./start-production.sh dev --force` | Start infrastructure, auto-kill conflicting processes |

Flags:

- `--force` or `-f`: Automatically kill processes using required ports without prompting
| Feature | Description |
|---|---|
| 🤖 AI Assistant | Multi-provider LLM support (Claude, GPT-4, Ollama local), context-aware suggestions |
| 🗺️ Knowledge Graph | Neo4j-powered tracking of hosts, services, vulnerabilities, credentials |
| 🔗 Attack Path Reasoning | AI-powered attack chain analysis with MITRE ATT&CK mapping |
| 🛡️ Symbolic Guardrails | Validates commands before execution, blocks dangerous operations |
| 🔧 Tool Orchestration | 50+ integrated security tools with async Celery execution |
| 📝 Auto-Documentation | Real-time field notes, professional report generation (MD/JSON/GhostWriter) |
| 🔐 Enterprise Security | JWT auth, TOTP MFA, RBAC with 40+ permissions, bcrypt password hashing |
| 🔌 MCP Protocol | Model Context Protocol integration for Claude, Cursor IDE |
| 🐳 Container Security | Kubernetes/Docker security benchmarks (kube-bench, trivy, grype) |
| ☁️ Cloud Security | AWS/Azure/GCP assessment (prowler, scoutsuite, checkov) |
| 🏗️ Microservices | Production-ready Docker Compose with 6 services, RabbitMQ, Redis |
| 📊 Web Dashboard | D3.js visualization of knowledge graph and attack paths |
| 🏠 Air-Gapped Mode | Run entirely on-premise with Ollama for local LLMs |
```text
                     ┌─────────────────┐
                     │   API Gateway   │
                     │   (Port 8000)   │
                     │                 │
                     │ • Rate limiting │
                     │ • Request proxy │
                     │ • Health agg    │
                     └────────┬────────┘
                              │
         ┌────────────────────┼────────────────────┐
         │                    │                    │
         ▼                    ▼                    ▼
┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│ Session Service │  │  Tool Service   │  │   AI Service    │
│   (Port 8001)   │  │   (Port 8002)   │  │   (Port 8003)   │
│                 │  │                 │  │                 │
│ • User login    │  │ • Tool registry │  │ • LLM queries   │
│ • JWT tokens    │  │ • Async exec    │  │ • Recommendations│
│ • MFA/TOTP      │  │ • Container     │  │ • Analysis      │
│ • RBAC          │  │   sandboxing    │  │ • Multi-provider│
│ • Engagements   │  │ • Output parse  │  │                 │
└────────┬────────┘  └────────┬────────┘  └────────┬────────┘
         │                    │                    │
         ▼                    ▼                    ▼
┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│   PostgreSQL    │  │    RabbitMQ     │  │     Ollama      │
│   (Port 5432)   │  │   (Port 5672)   │  │  (Port 11434)   │
│                 │  │                 │  │                 │
│ • Users         │  │ • Task queue    │  │ • Local LLM     │
│ • Engagements   │  │ • Celery broker │  │ • Air-gapped    │
│ • Findings      │  │                 │  │                 │
│ • Sessions      │  └────────┬────────┘  └─────────────────┘
└─────────────────┘           │
                              ▼
                     ┌─────────────────┐
                     │ Celery Workers  │
                     │                 │
                     │ • Tool exec     │
                     │ • Container run │
                     │ • Output parse  │
                     └─────────────────┘

         ┌────────────────────┬────────────────────┐
         │                    │                    │
         ▼                    ▼                    ▼
┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│  Graph Service  │  │ Report Service  │  │      Redis      │
│   (Port 8004)   │  │   (Port 8005)   │  │   (Port 6379)   │
│                 │  │                 │  │                 │
│ • Neo4j CRUD    │  │ • MD/JSON/GW    │  │ • Session cache │
│ • Attack paths  │  │ • Templates     │  │ • Token blacklist│
│ • Visualization │  │ • Export        │  │ • Rate limits   │
└────────┬────────┘  └─────────────────┘  └─────────────────┘
         │
         ▼
┌─────────────────┐
│      Neo4j      │
│   (Port 7687)   │
│                 │
│ • Hosts         │
│ • Services      │
│ • Vulns         │
│ • Credentials   │
│ • Relationships │
└─────────────────┘
```
| Service | Port | Description |
|---|---|---|
| API Gateway | 8000 | Main entry point for all API requests |
| Session Service | 8001 | Authentication, users, engagements |
| Tool Service | 8002 | Security tool execution |
| AI Service | 8003 | LLM queries and recommendations |
| Graph Service | 8004 | Neo4j knowledge graph operations |
| Report Service | 8005 | Report generation and export |
| PostgreSQL | 5432 | Primary database |
| Neo4j HTTP | 7474 | Neo4j browser interface |
| Neo4j Bolt | 7687 | Neo4j database protocol |
| Redis | 6379 | Cache and session storage |
| RabbitMQ | 5672 | Message queue |
| RabbitMQ Mgmt | 15672 | RabbitMQ management console |
| Ollama | 11434 | Local LLM server |
For single-user development or testing:

```bash
# 1. Clone repository
git clone https://github.com/your-org/binstrike.git
cd binstrike

# 2. Create virtual environment
python3 -m venv venv
source venv/bin/activate

# 3. Install with all extras
pip install -e ".[all]"

# 4. Create config directory
mkdir -p ~/.binstrike

# 5. Create configuration file
cat > ~/.binstrike/config.yaml << 'EOF'
ai:
  provider: ollama
  model: llama3:8b
  base_url: http://localhost:11434
  timeout: 120
  max_tokens: 4096
guardrails:
  environment: lab
  strict_mode: false
  require_confirmation: true
  allowed_targets: []
session:
  data_dir: ~/.binstrike/sessions
  auto_save_interval: 60
logging:
  level: INFO
  file: ~/.binstrike/binstrike.log
EOF

# 6. Start Ollama (if using local AI)
ollama serve &
ollama pull llama3:8b

# 7. Run BinStrike CLI
binstrike
```

For production deployment, see the Full Production Setup section above.
For full tool orchestration when running in CLI mode:

```bash
# Ubuntu/Debian - Core network tools
sudo apt update
sudo apt install -y nmap nikto gobuster sqlmap hydra john

# ProjectDiscovery and other Go-based tools (requires Go)
go install github.com/projectdiscovery/nuclei/v2/cmd/nuclei@latest
go install github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
go install github.com/ffuf/ffuf@latest

# Container security tools
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
```

Complete list of all environment variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| **Database** | | | |
| `POSTGRES_USER` | Yes | `binstrike` | PostgreSQL username |
| `POSTGRES_PASSWORD` | Yes | - | PostgreSQL password (change this!) |
| `POSTGRES_DB` | Yes | `binstrike` | Database name |
| `POSTGRES_PORT` | No | `5432` | PostgreSQL port |
| **Neo4j** | | | |
| `NEO4J_USER` | Yes | `neo4j` | Neo4j username |
| `NEO4J_PASSWORD` | Yes | - | Neo4j password (change this!) |
| `NEO4J_HTTP_PORT` | No | `7474` | Neo4j browser port |
| `NEO4J_BOLT_PORT` | No | `7687` | Neo4j Bolt protocol port |
| **Redis** | | | |
| `REDIS_PASSWORD` | Yes | - | Redis password (change this!) |
| `REDIS_PORT` | No | `6379` | Redis port |
| **RabbitMQ** | | | |
| `RABBITMQ_USER` | Yes | `binstrike` | RabbitMQ username |
| `RABBITMQ_PASSWORD` | Yes | - | RabbitMQ password (change this!) |
| `RABBITMQ_VHOST` | No | `binstrike` | RabbitMQ virtual host |
| `RABBITMQ_PORT` | No | `5672` | RabbitMQ AMQP port |
| `RABBITMQ_MGMT_PORT` | No | `15672` | RabbitMQ management UI port |
| **Authentication** | | | |
| `JWT_SECRET` | Yes | - | JWT signing secret (64+ chars, change this!) |
| `JWT_ALGORITHM` | No | `HS256` | JWT algorithm |
| `ACCESS_TOKEN_EXPIRE_MINUTES` | No | `30` | Access token lifetime |
| `REFRESH_TOKEN_EXPIRE_DAYS` | No | `7` | Refresh token lifetime |
| **AI Providers** | | | |
| `AI_PROVIDER` | No | `ollama` | AI provider (ollama/anthropic/openai) |
| `OLLAMA_BASE_URL` | No | `http://localhost:11434` | Ollama server URL |
| `ANTHROPIC_API_KEY` | No | - | Anthropic API key |
| `OPENAI_API_KEY` | No | - | OpenAI API key |
| **Service Ports** | | | |
| `API_GATEWAY_PORT` | No | `8000` | API gateway port |
| `SESSION_SERVICE_PORT` | No | `8001` | Session service port |
| `TOOL_SERVICE_PORT` | No | `8002` | Tool service port |
| `AI_SERVICE_PORT` | No | `8003` | AI service port |
| `GRAPH_SERVICE_PORT` | No | `8004` | Graph service port |
| `REPORT_SERVICE_PORT` | No | `8005` | Report service port |
| **Other** | | | |
| `TOOL_TIMEOUT` | No | `600` | Tool execution timeout (seconds) |
| `LOG_LEVEL` | No | `INFO` | Logging level |
| `RATE_LIMIT` | No | `100` | API rate limit per minute |
1. User sends credentials to `/auth/login`
2. Server validates the credentials
3. If MFA is enabled, the server requires an MFA code
4. Server returns an `access_token` (30 min) + `refresh_token` (7 days)
5. Client includes the `access_token` in the `Authorization` header
6. When the `access_token` expires, use the `refresh_token` to get new tokens
```bash
# Login request
curl -X POST http://localhost:8000/auth/login \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "password": "YourPassword123!",
    "mfa_code": "123456"
  }'
```

Response:

```json
{
  "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "refresh_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "token_type": "Bearer",
  "expires_in": 1800,
  "user": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "email": "user@example.com",
    "role": "operator"
  }
}
```

Use the token in subsequent requests:

```bash
curl -X GET http://localhost:8000/users/me \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
```

MFA setup:
```bash
# 1. Setup MFA (returns QR code)
curl -X POST http://localhost:8000/auth/mfa/setup \
  -H "Authorization: Bearer $TOKEN"

# Response includes:
# - qr_code: Base64 PNG image for Google Authenticator
# - secret: TOTP secret key
# - backup_codes: 10 one-time recovery codes

# 2. Scan the QR code with your authenticator app

# 3. Verify MFA with a code from the app
curl -X POST http://localhost:8000/auth/mfa/verify \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"code": "123456"}'
```

| Role | Permissions |
|---|---|
| Admin | Full system access: manage users, configure system, all operations |
| Lead | Create engagements, assign team members, manage API keys |
| Operator | Execute tools, create findings, view reports, edit engagements |
| Viewer | Read-only access to engagements, findings, and reports |
| API | Programmatic access with scoped permissions |
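The MFA setup response above includes ten one-time backup codes. Generating codes of that general shape is a one-liner with the `secrets` module; the uppercase-hex format here is an assumption for illustration, not BinStrike's actual code format:

```python
import secrets

def generate_backup_codes(count: int = 10, length: int = 8) -> list[str]:
    """Generate `count` cryptographically random hex recovery codes."""
    # token_hex(n) yields 2n hex chars, so halve the requested length
    return [secrets.token_hex(length // 2).upper() for _ in range(count)]

codes = generate_backup_codes()
print(codes[0])  # e.g. '3F9A0C1B' (random each run)
```

Using `secrets` rather than `random` matters here: recovery codes are credentials and must come from a CSPRNG.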
- Password Security: bcrypt hashing with cost factor 12
- Password Requirements: 8+ chars, uppercase, lowercase, digit, special character
- Account Lockout: 5 failed attempts = 15 minute lockout
- Session Management: Track all sessions with IP/User-Agent
- Token Security: SHA-256 token hashing, Redis blacklist for revoked tokens
- Scope Enforcement: All tool executions validated against engagement scope
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/health` | GET | No | System health check |
| `/auth/login` | POST | No | User login |
| `/auth/refresh` | POST | No | Refresh access token |
| `/auth/logout` | POST | Yes | Logout and invalidate session |
| `/auth/mfa/setup` | POST | Yes | Setup MFA |
| `/auth/mfa/verify` | POST | Yes | Verify MFA code |
| `/users/me` | GET | Yes | Get current user profile |
| `/users` | GET | Admin | List all users |
| `/users` | POST | Admin | Create new user |
| `/engagements` | GET | Yes | List engagements |
| `/engagements` | POST | Lead+ | Create engagement |
| `/engagements/{id}` | GET | Yes | Get engagement details |
| `/tools` | GET | Yes | List available tools |
| `/tools/{name}/execute` | POST | Operator+ | Execute tool |
| `/tools/executions/{id}` | GET | Yes | Get execution status |
| `/ai/ask` | POST | Yes | Query AI assistant |
| `/ai/recommend` | POST | Yes | Get recommendations |
| `/graph/summary/{id}` | GET | Yes | Graph summary |
| `/graph/visualization/{id}` | GET | Yes | D3.js visualization data |
| `/reports/generate` | POST | Operator+ | Generate report |
| `/reports/{id}/download` | GET | Yes | Download report file |
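Tool runs are asynchronous: `/tools/{name}/execute` returns immediately and `/tools/executions/{id}` reports progress, so clients typically poll until a terminal state. A generic polling loop with timeout — `fetch_status` stands in for the HTTP call, and the terminal status names are assumptions based on the examples below:

```python
import time

def wait_for_execution(fetch_status, timeout: float = 600, interval: float = 2.0) -> str:
    """Poll `fetch_status()` until a terminal status or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()  # e.g. GET /tools/executions/{id} -> "status" field
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("execution did not finish in time")

# Simulated run: pending, running, then completed
states = iter(["pending", "running", "completed"])
print(wait_for_execution(lambda: next(states), interval=0.01))  # → completed
```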
1. Ensure the BinStrike Flask server is running:

   ```bash
   python -m binstrike.server --host 127.0.0.1 --port 8888
   ```

2. Edit the Claude Desktop config:

   - Linux: `~/.config/claude-desktop/config.json`
   - Windows: `%APPDATA%\Claude\config.json`

   ```json
   {
     "mcpServers": {
       "binstrike": {
         "command": "python3",
         "args": ["-m", "binstrike.mcp.server", "--server", "http://127.0.0.1:8888"],
         "description": "BinStrike AI - Penetration Testing Platform"
       }
     }
   }
   ```

3. Restart Claude Desktop.

Add to your MCP settings:

```json
{
  "mcpServers": {
    "binstrike": {
      "command": "python3",
      "args": ["-m", "binstrike.mcp.server", "--server", "http://127.0.0.1:8888"]
    }
  }
}
```

| Category | Tools |
|---|---|
| Network Scanning | nmap, rustscan, masscan, naabu, autorecon |
| Web Application | gobuster, feroxbuster, nikto, nuclei, sqlmap, ffuf, wpscan, whatweb |
| OSINT | subfinder, amass, theharvester |
| Authentication | hydra, john, hashcat |
| Container Security | trivy, grype, syft, kube-bench, docker-bench, falco |
| Cloud Security | prowler, scoutsuite, checkov, terrascan |
| SMB/AD | crackmapexec, enum4linux, smbmap |
```bash
# Execute nmap scan
curl -X POST http://localhost:8000/tools/nmap/execute \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "target": "192.168.1.0/24",
    "parameters": {
      "ports": "22,80,443,8080",
      "scan_type": "sV"
    },
    "engagement_id": "your-engagement-uuid"
  }'
```

Response:

```json
{
  "execution_id": "exec-uuid",
  "status": "pending",
  "tool_name": "nmap"
}
```

```bash
# Check status
curl -X GET http://localhost:8000/tools/executions/exec-uuid \
  -H "Authorization: Bearer $TOKEN"
```

An end-to-end workflow:
```bash
# 1. Login and get token
TOKEN=$(curl -s -X POST http://localhost:8000/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"admin@binstrike.local","password":"Admin@BinStrike2024!"}' \
  | jq -r '.access_token')

# 2. Create an engagement
ENGAGEMENT=$(curl -s -X POST http://localhost:8000/engagements \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"Corporate Pentest Q1","scope":["192.168.1.0/24","example.com"]}' \
  | jq -r '.id')

# 3. Run nmap scan
EXEC_ID=$(curl -s -X POST http://localhost:8000/tools/nmap/execute \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"target\":\"192.168.1.10\",\"engagement_id\":\"$ENGAGEMENT\"}" \
  | jq -r '.execution_id')

# 4. Check scan status
curl -s http://localhost:8000/tools/executions/$EXEC_ID \
  -H "Authorization: Bearer $TOKEN"

# 5. Get AI recommendations
curl -s -X POST http://localhost:8000/ai/recommend \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"engagement_id":"'$ENGAGEMENT'","current_phase":"enumeration"}'

# 6. Generate report
curl -s -X POST http://localhost:8000/reports/generate \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"engagement_id":"'$ENGAGEMENT'","format":"markdown"}'
```

Install dev dependencies and run the component tests:
```bash
pip install -e ".[dev]"

# Run production component tests
python tests/test_production_components.py
```

Expected output:

```text
============================================================
FINAL SUMMARY
============================================================
Total Tests: 42
Passed: 42
Failed: 0
Success Rate: 100.0%
✓ ALL TESTS PASSED!
```

For local development, start only the infrastructure:
```bash
./start-production.sh dev

# Run services locally for debugging
cd services/session-service && python main.py
cd services/tool-service && python main.py
# etc.
```

If services fail to start, check the logs:
```bash
./start-production.sh logs

# Check a specific service
./start-production.sh logs session-service

# Verify ports are available
ss -tulpn | grep -E '(8000|8001|5432|6379|7687)'
```

For database connection issues:

```bash
# Check PostgreSQL status
docker compose -f docker-compose.production.yml logs postgres

# Verify connection
docker exec -it binstrike-postgres psql -U binstrike -d binstrike -c "SELECT 1"
```

If authentication fails:

- Ensure `JWT_SECRET` is the same in `.env` for all services (they read it from there)
- The token may have expired (default lifetime: 30 min)
- Use `/auth/refresh` to get a new access token
If MFA codes are rejected:

- Ensure the device time is synchronized (TOTP is time-sensitive)
- Codes are valid for ±30 seconds
- Use a backup code if the authenticator is unavailable
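The ±30-second tolerance means the server also accepts the codes for the adjacent time steps. A standard TOTP sketch (RFC 6238, SHA-1, 6 digits, 30-second step) illustrating that window — this mirrors how TOTP works in general, not BinStrike's exact implementation:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP over the current time-step counter."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret: bytes, code: str, unix_time: int, window: int = 1) -> bool:
    """Accept the current step plus `window` steps either side (±30 s here)."""
    return any(totp(secret, unix_time + drift * 30) == code
               for drift in range(-window, window + 1))

secret = b"12345678901234567890"  # RFC 6238 test secret
print(totp(secret, 59))           # → 287082 (RFC test vector, T=59)
print(verify(secret, totp(secret, 59), 89))  # → True: still valid one step later
```

This is why clock drift beyond one step makes valid codes fail, and why time synchronization is the first thing to check.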
MIT License - See LICENSE file for details.
Built with ❤️ for the security community